On-chip Few-shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor
Recent work suggests that synaptic plasticity dynamics in biological models
of neurons and neuromorphic hardware are compatible with gradient-based
learning (Neftci et al., 2019). Gradient-based learning requires iterating
several times over a dataset, which is both time-consuming and constrains the
training samples to be independently and identically distributed. This is
incompatible with learning systems that do not have boundaries between training
and inference, such as in neuromorphic hardware. One approach to overcome these
constraints is transfer learning, where a portion of the network is pre-trained
and mapped into hardware and the remaining portion is trained online. Transfer
learning has the advantage that pre-training can be accelerated offline if the
task domain is known, and few samples of each class are sufficient for learning
the target task at reasonable accuracies. Here, we demonstrate online
surrogate gradient few-shot learning on Intel's Loihi neuromorphic research
processor using features pre-trained with spike-based gradient
backpropagation-through-time. Our experimental results show that the Loihi chip
can learn gestures online using a small number of shots and achieve results
that are comparable to the models simulated on a conventional processor
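The surrogate-gradient trick at the heart of this line of work can be sketched in a few lines: the non-differentiable spike threshold is kept in the forward pass, while a smooth surrogate stands in for its derivative during backpropagation. The following is a minimal NumPy sketch under assumed parameters (a fast-sigmoid surrogate and a single linear unit), not the authors' Loihi implementation:

```python
import numpy as np

def spike(v, thresh=1.0):
    """Forward pass: non-differentiable Heaviside step (spike if v >= thresh)."""
    return (v >= thresh).astype(float)

def surrogate_grad(v, thresh=1.0, scale=5.0):
    """Backward pass: derivative of a fast sigmoid, used in place of the
    Heaviside's derivative (which is zero almost everywhere)."""
    return scale / (1.0 + scale * np.abs(v - thresh)) ** 2

# One gradient step on a single thresholded linear unit (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
x = rng.normal(size=4)
target = 1.0

v = w @ x                             # membrane "potential"
out = spike(v)                        # forward: hard threshold
err = out - target                    # squared-error derivative
grad_w = err * surrogate_grad(v) * x  # backward: surrogate in the chain rule
w -= 0.1 * grad_w
```

The forward spike stays binary (hardware-friendly), while the surrogate keeps the gradient informative near the threshold.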
Event-Based Angular Velocity Regression with Spiking Networks
Spiking Neural Networks (SNNs) are bio-inspired networks that process
information conveyed as temporal spikes rather than numeric values. An example
of a sensor providing such data is the event camera. It only produces an event
when a pixel reports a significant brightness change. Similarly, a spiking
neuron of an SNN only produces a spike whenever a significant number of spikes
occur within a short period of time. Due to their spike-based computational
model, SNNs can process output from event-based, asynchronous sensors without
any pre-processing at extremely low power, unlike standard artificial neural
networks. This is possible due to specialized neuromorphic hardware that
implements the highly-parallelizable concept of SNNs in silicon. Yet, SNNs have
not enjoyed the same rise in popularity as artificial neural networks. This
stems not only from the fact that their input format is rather unconventional
but also from the challenges in training spiking networks. Despite their temporal
nature and recent algorithmic advances, they have been mostly evaluated on
classification problems. We propose, for the first time, a temporal regression
problem of numerical values given events from an event camera. We specifically
investigate the prediction of the 3-DOF angular velocity of a rotating event
camera with an SNN. The difficulty of this problem arises from the prediction
of angular velocities continuously in time directly from irregular,
asynchronous event-based input. Directly utilising the output of event cameras
without any pre-processing ensures that we inherit all the benefits that they
provide over conventional cameras. That is, high temporal resolution, high
dynamic range, and no motion blur. To assess the performance of SNNs on
this task, we introduce a synthetic event camera dataset generated from
real-world panoramic images and show that we can successfully train an SNN to
perform angular velocity regression
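The thresholding behaviour described above (a spike is emitted only when enough input spikes arrive within a short window) is commonly captured by a leaky integrate-and-fire (LIF) neuron. The following discrete-time sketch uses illustrative constants and is not the paper's actual network:

```python
def lif(spikes_in, decay=0.9, w=0.5, thresh=1.0):
    """Discrete-time leaky integrate-and-fire neuron.

    spikes_in: binary list of input spikes, one entry per time step.
    The membrane potential leaks each step, integrates weighted input,
    and resets after emitting an output spike.
    """
    v, out = 0.0, []
    for s in spikes_in:
        v = decay * v + w * s   # leak, then integrate the weighted input
        if v >= thresh:         # enough recent input -> output spike
            out.append(1)
            v = 0.0             # reset after firing
        else:
            out.append(0)
    return out

# A burst of input spikes crosses threshold; sparse input leaks away.
print(lif([1, 1, 1, 0, 0, 1, 0, 0, 0, 1]))
# -> [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```

Because the potential decays between inputs, only temporally clustered spikes accumulate enough charge to fire, which is exactly the short-window sensitivity the abstract describes.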
Online Few-shot Gesture Learning on a Neuromorphic Processor
We present the Surrogate-gradient Online Error-triggered Learning (SOEL)
system for online few-shot learning on neuromorphic processors. The SOEL
learning system uses a combination of transfer learning and principles of
computational neuroscience and deep learning. We show that partially trained
deep Spiking Neural Networks (SNNs) implemented on neuromorphic hardware can
rapidly adapt online to new classes of data within a domain. SOEL updates
trigger when an error occurs, enabling faster learning with fewer updates.
Using gesture recognition as a case study, we show SOEL can be used for online
few-shot learning of new classes of pre-recorded gesture data and rapid online
learning of new gestures from data streamed live from a Dynamic Active-pixel
Vision Sensor to an Intel Loihi neuromorphic research processor.
Comment: 10 pages, submitted to IEEE JETCAS for review
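The error-triggered idea (weights change only when the error crosses a threshold, so learning proceeds in fewer, larger steps) can be conveyed with a small sketch. The threshold value and the delta-rule update below are illustrative assumptions, not SOEL's actual on-chip rule:

```python
import numpy as np

def error_triggered_step(w, x, target, err_thresh=0.2, lr=0.1):
    """Apply a delta-rule update only when the output error exceeds a
    threshold; small errors trigger no update at all (illustrative)."""
    y = float(w @ x)
    err = target - y
    updated = abs(err) > err_thresh
    if updated:                       # error-triggered: skip small errors
        w = w + lr * err * x
    return w, updated

w = np.zeros(3)
x = np.array([1.0, 0.5, -0.5])
w, fired = error_triggered_step(w, x, target=1.0)
print(fired)   # the first error is large, so an update fires
```

Gating updates on error magnitude is what lets the system learn quickly from few shots: most presentations cost nothing, and plasticity is spent only where the network is demonstrably wrong.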
NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking
The field of neuromorphic computing holds great promise in terms of advancing
computing efficiency and capabilities by following brain-inspired principles.
However, the rich diversity of techniques employed in neuromorphic research has
resulted in a lack of clear standards for benchmarking, hindering effective
evaluation of the advantages and strengths of neuromorphic methods compared to
traditional deep-learning-based methods. This paper presents a collaborative
effort, bringing together members from academia and the industry, to define
benchmarks for neuromorphic computing: NeuroBench. The goals of NeuroBench are
to be a collaborative, fair, and representative benchmark suite developed by
the community, for the community. In this paper, we discuss the challenges
associated with benchmarking neuromorphic solutions, and outline the key
features of NeuroBench. We believe that NeuroBench will be a significant step
towards defining standards that can unify the goals of neuromorphic computing
and drive its technological progress. Please visit neurobench.ai for the latest
updates on the benchmark tasks and metrics
Supervised learning in multilayer spiking neural network
Spiking Neural Networks (SNNs) are an exciting prospect in the field of Artificial Neural Networks (ANNs). In ANNs, we try to replicate the massive interconnection of neurons, the computational units evident in the brain, to perform useful tasks, albeit with highly abstracted models of neurons. Mostly, the artificial neurons are realized in the form of non-linear activation functions which process numeric inputs and outputs. SNNs are less abstract than these systems with non-linear activation functions in the sense that they make use of mathematical models of neurons, termed spiking neurons, which process inputs in the form of spikes and emit spikes as output. This is exactly the way in which natural neurons exchange information. Since spikes are events in time, there is an extra dimension of time along with amplitude in SNNs, which makes them suited to temporal processes.
There are a few supervised learning algorithms for learning in SNNs. For learning in multilayer architectures, we have SpikeProp and its extensions, and Multi-ReSuMe. The SpikeProp methods are based on an adaptation of backpropagation for SNNs and mostly consider the first spike of the neuron. The original SpikeProp is usually slow and faces stability issues during learning. A large learning rate, and even a very small learning rate, often makes it unstable. The instability is observable in the form of sudden jumps in training error, called surges, which change the course of learning and often cause failure of the learning process as well.
To introduce a stability criterion, we present a weight convergence analysis of SpikeProp. Based on the convergence condition, we introduce an adaptive learning rate rule which selects a learning rate that guarantees convergence of the learning process while remaining large enough that learning is fast. Based on performance on several benchmark problems, this method with learning rate adaptation, SpikePropAd, demonstrates fewer surges and faster learning compared to SpikeProp and its faster variant RProp. The performance is evaluated broadly in terms of speed of learning and rate of successful learning.
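The flavour of a convergence-guaranteeing adaptive rate can be conveyed with a generic normalized-step sketch. Normalizing the rate by the input energy (as in normalized LMS filtering) is an assumed stand-in here, not the thesis's actual SpikePropAd rule:

```python
import numpy as np

def normalized_lr_step(w, x, target, mu=0.5, eps=1e-8):
    """Gradient step whose rate is normalized by the input energy, a
    classical way to guarantee convergence of the weight update
    (cf. normalized LMS); mu in (0, 2) keeps the error contracting."""
    err = target - float(w @ x)
    lr = mu / (eps + float(x @ x))   # shrink the step when inputs are large
    return w + lr * err * x

w = np.zeros(2)
x = np.array([1.0, 2.0])
for _ in range(50):                  # repeated presentations of one sample
    w = normalized_lr_step(w, x, target=3.0)
print(w @ x)                         # converges toward the target output 3.0
```

With this normalization the per-step error contracts by a fixed factor (here roughly one half) regardless of input scale, which is the kind of guaranteed, non-surging behaviour the adaptive rule above is designed to provide.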
We also consider the internal and external disturbances to the learning process and provide a thorough error analysis in addition to the weight convergence analysis. We use conic sector stability theory to determine the conditions for making the learning process stable in L2 space and extend the result to L1 stability. L2 stability in theory requires the disturbance to die out after a certain period of time, whereas L1 stability implies that the system is stable provided the disturbance is within bounds. We explore two approaches for robust stability of SpikeProp in the presence of disturbance: the individual error approach, which leads to SpikePropR learning, and the total error approach, which leads to SpikePropRT learning. SpikePropR is a slight improvement over SpikePropAd. SpikePropRT, on the other hand, is a significant improvement over SpikePropAd, especially for real-world, non-synthetic datasets.
An event-based weight update rule for learning spike trains, rather than only the time of the first spike, EvSpikeProp, is also proposed. This method overcomes the limitations of other multi-spike extensions of SpikeProp and is suitable for learning in an online fashion, which suits SNNs well because spikes are continuous processes. The results derived in the convergence and stability analysis of SpikeProp are extended to the multi-spike framework to show weight convergence and robust stability in L2 and L1 space. The resulting method is named EvSpikePropR. It shows better performance than Multi-ReSuMe across different learning problems.
Apart from that, we also extended the adaptive learning rule based on weight convergence to delay learning in SNNs. It is named SpikePropAdDel. This delay learning extension is useful because it speeds up the learning process, eliminates redundant synapses, and minimizes surges as well.
Doctor of Philosophy (EEE)
Robustness to training disturbances in SpikeProp Learning
Stability is a key issue during spiking neural network training using SpikeProp. The inherent nonlinearity of the spiking neuron means that the learning manifold changes abruptly; therefore, we need to carefully choose the learning steps at every instance. Other sources of instability are the external disturbances that come along with the training samples, as well as the internal disturbances that arise due to modeling imperfection. The unstable learning scenario can be indirectly observed in the form of surges, which are sudden increases in the learning cost and are a common occurrence during SpikeProp training. Research in the past has shown that a proper learning step size is crucial to minimize surges during the training process. To determine a proper learning step that avoids steep learning manifolds, we perform a weight convergence analysis of SpikeProp learning in the presence of disturbance signals. The weight convergence analysis is further extended to a robust stability analysis linked with the overall system error. This ensures boundedness of the total learning error under the minimal assumption of bounded disturbance signals. These analyses result in the learning rate normalization scheme, which is the key result of this paper. The performance of learning using this scheme has been compared with the prevailing methods on different benchmark data sets, and the results show that this method has stable learning, reflected in minimal surges during learning, a higher success rate across training instances, and faster learning as well